
Future of the Web: Group items tagged "connection" and "enough"


Gonzalo San Gil, PhD.

How to speed up your internet connection on Linux | HowtoForge - 0 views

  •  
    "...there isn't a way to transform a slow internet connection into a lighting-speed one if your provider is just not giving you enough bandwidth, no matter what you do. This post is only aiming to provide generic advice on how to make things a little bit better if possible, and if applicable to each case."
Gonzalo San Gil, PhD.

The U.N. wants to connect the world to the Internet. That's not enough. - 0 views

  •  
    "But connectivity alone cannot be global policy. Respect for privacy and the freedom of expression must go hand in glove with the drive to connection."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
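  •  
    To make the class-attribute extension concrete, here is a minimal sketch; the class names are invented for illustration rather than drawn from any published microformat:

        <!-- Plain XHTML: structurally, both of these are just paragraphs -->
        <p>It was the best of times, it was the worst of times.</p>

        <!-- The class attribute layers publisher-defined semantics on top -->
        <p class="epigraph">All happy families are alike.</p>
        <p class="epigraph-attribution">Leo Tolstoy, Anna Karenina</p>

    A stylesheet or transformation script can then treat class="epigraph" much as a dedicated epigraph element would be treated in DocBook, without ever leaving the XHTML document type.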
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
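  •  
    Unzipping an ePub makes that anatomy visible. A minimal ePub 2-style archive looks roughly like the following sketch; the OEBPS directory name is conventional rather than required, and the chapter file name is illustrative:

        mimetype                 (the literal string "application/epub+zip")
        META-INF/
          container.xml          (points to the package file below)
        OEBPS/
          content.opf            (metadata: title, author, manifest, spine)
          toc.ncx                (machine-readable table of contents)
          chapter01.xhtml        (the book's content: ordinary XHTML)
          styles.css             (optional presentation)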
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
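  •  
    Unpacked, an IDML archive shows that same cluster-of-XML-files structure directly. The layout below is a representative sketch; the u-prefixed identifiers in the file names vary from document to document:

        designmap.xml                       (the root map tying the parts together)
        MasterSpreads/MasterSpread_ub6.xml  (master page definitions)
        Spreads/Spread_uc8.xml              (page geometry and frames)
        Stories/Story_u123.xml              (the actual text content)
        Resources/Styles.xml                (paragraph and character styles)
        Resources/Graphic.xml               (colours and swatches)
        META-INF/container.xml              (standard zip-package manifest)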
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
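  •  
    The rewrite in question looks something like this; the exported class name is representative of the kind of tagging InDesign produces, not an exact reproduction of its output:

        <!-- As exported: list items disguised as styled paragraphs -->
        <p class="bullet-item">Prefer existing tools over specialized ones.</p>
        <p class="bullet-item">Prefer free software over proprietary systems.</p>

        <!-- After cleanup: the structure lives in the markup itself -->
        <ul>
          <li>Prefer existing tools over specialized ones.</li>
          <li>Prefer free software over proprietary systems.</li>
        </ul>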
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
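  •  
    The script itself is not reproduced in this excerpt, but a minimal sketch of the same idea conveys the flavour. It assumes the source is namespaced XHTML, maps each paragraph to an ICML paragraph range, and uses an assumed style name ("Body Text") that would have to exist in the InDesign template; a real script must also emit ICML's outer document wrapper, which is omitted here:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>

          <!-- Process only the paragraphs; XSLT's built-in templates would
               otherwise copy stray text (e.g. the page title) into the output -->
          <xsl:template match="/">
            <xsl:apply-templates select="//h:p"/>
          </xsl:template>

          <!-- Each XHTML <p> becomes an ICML paragraph range that points at a
               paragraph style defined in the InDesign template -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
                <Content><xsl:value-of select="normalize-space(.)"/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Run with any XSLT processor, for instance xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml, and the resulting file can then be placed into the template as described.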
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gary Edwards

Is Linux dead for the desktop? - 1 views

  • Linux never had the apps
  • Charles King, an IT analyst who follows enterprise trends, says the big change is in IT. At one time, executives in charge of computing services were mostly concerned with operating systems and applications for a massive throng of traditional business users. Those users have now flocked to mobile computing devices, but they still have a Windows PC sitting on their desk.
  • "Today, Microsoft's lock (on the desktop, anyway) remains secure, even in the face of Apple's surge," King says. "Ironically enough, though, the open source model remains alive and well but mostly in the development of new standards and development platforms."
  • ...5 more annotations...
  • David Johnson: "What corporate end users really need is familiarity, consistency and compatibility - something Apple, Microsoft and Google seem more adept at offering."
  • Can desktop Linux OS be saved? Johnson says the best example of how to save Linux OS is the Chrome OS, an all-in-one laptop and desktop offering available through major consumer electronics companies such as LG (with their Chromebase all-in-one) and the Samsung Chromebook 2
  • The problem is that Chrome OS and Android aren't the same as Linux OS on the desktop. It's a complete reinvention. There are few Windows-like productivity apps and no knowledge worker apps designed for keyboard and mouse.
  • All of the experts agree: Windows won every battle for the business user.
  •  
    "For executives in charge of desktop deployments in a large company, Linux OS was once hailed as a saviour for corporate end users. With incredibly low pricing - free, with fee-based support plans, for example - distributions such as Ubuntu Desktop and SUSE Linux Enterprise offered a "good enough" user interface, along with plenty of powerful apps and a rich browser. A few years ago, both Dell and HP jumped on the bandwagon; today, they still offer "developer" and "workstation" models that come pre-loaded with a Linux install. Plus, anyone who follows the Linux market knows that Google has reimagined Linux as a user-friendly tablet interface (the wildly popular Android OS) and a browser-only desktop variant (Chrome OS). Linux also shows up on countless connected home gadgets, fitness trackers, watches and other low-cost devices, mostly because OS costs are so low. The desktop computing OS for end users has failed to capture any attention lately, though. Al Gillen, the programme vice president for servers and system software at IDC, says the Linux OS as a computing platform for end users is at least comatose - and probably dead. Yes, it has reemerged on Android and other devices, but it has gone almost completely silent as a competitor to Windows for mass deployment. As they say, you can hear the crickets chirping."
Gonzalo San Gil, PhD.

US consumers don't want web-enabled toasters * The Register - 4 views

  •  
    [And three-quarters downloaded nothing in last quarter. By John Oates, posted in Music and Media, 29th September 2010 11:31 GMT. A survey of previous NPD research has found US consumers not fully embracing the connected future so beloved of tech marketeers. Researchers found that for most US users a PC, with email and web browsing, is enough. Although some are using games consoles and smartphones to get online the vast majority are not. ...]
Paul Merrell

EU considers spending €1 billion for satellite broadband technology - Interna... - 0 views

  • The €200 billion economic rescue plan being considered this week by European Union leaders includes a proposal to spend €1 billion on bringing high-speed Internet access to rural areas. The proposal is likely to pit the Continent's telecommunications operators against satellite companies, which say they are uniquely suited to expand the broadband, or high-speed, network to underserved parts of Eastern Europe and the Alps by the end of 2010.
  • But support for the plan by EU government leaders, who begin a two-day meeting to consider the rescue plan Thursday, is not assured. The money would come from unspent funds in the current EU budget, which under EU rules normally revert to member countries. Germany, which contributes the most to the EU budget and stands to get the largest refund if the project is rejected, opposes the expenditure.
  • Across the EU, 21.7 percent of residents had broadband Internet access in July, according to the commission; 107.6 million received service from a telephone DSL line or a cable television connection and 130,592 via satellite. Only 6 percent of EU residents on average received broadband via mobile phones.
  • ...1 more annotation...
  • Until now, Baugh said, satellite broadband had been hindered by the relatively high cost of the hardware consumers needed to gain access to the service. But recent advances have lowered the cost to roughly €400, including installation, from several thousand euros a few years ago. At about €30 a month, service packages are comparable to those of DSL and cable.
  •  
    A billion euros is chicken feed compared to other portions of the E.U. economic stimulus initiatives in the works that respond to the major recession under way. Still, this could be a significant foot in the door for satellite broadband in the E.U., perhaps enough to build out the infrastructure for a more serious challenge to cable and telephony broadband. But I wonder whether only a billion euros buys enough redundancy to gracefully handle a satellite's death once far more broadband users depend on it.
Paul Merrell

The Wifi Alliance, Coming Soon to Your Neighborhood: 5G Wireless | Global Research - Ce... - 0 views

  • Just as any new technology claims to offer the most advanced development, promising that its definition of progress will cure society's ills or make life easier by eliminating the drudgery of antiquated appliances, the Wifi Alliance was organized as a worldwide wireless network to connect "everyone and everything, everywhere" as it promised "improvements to nearly every aspect of daily life." The Alliance, which makes no pretense of potential health or environmental concerns, further proclaimed (and it may be correct) that there are "more wifi devices than people on earth". It is that inescapable exposure to ubiquitous wireless technologies wherein lies the problem.
  • Even prior to the 1997 introduction of commercially available wifi devices, which have since saturated every industrialized country, EMF wifi hot spots were everywhere. Today, with the addition of cell and cordless phones and towers, broadcast antennas, smart meters and the pervasive computer wifi, both adults and especially vulnerable children are surrounded 24-7 by an inescapable presence, with little recognition that all radiation exposure is cumulative.
  • The National Toxicology Program (NTP), a branch of the US National Institutes of Health (NIH), conducted the world's largest study on radiofrequency radiation used by the US telecommunications industry and found a "statistically significant increase in brain and heart cancers" in animals exposed to EMF (electromagnetic fields). The NTP study confirmed the connection between mobile and wireless phone use and human brain cancer risks, and its conclusions were supported by other epidemiological peer-reviewed studies. Of special note is that the studies citing biological risk to human health involved exposures below accepted international exposure standards.
  •  
    ""…what this means is that the current safety standards as off by a factor of about 7 million.' Pointing out that a recent FCC Chair was a former lobbyist for the telecom industry, "I know how they've attacked various people.  In the U.S. … the funding for the EMF research [by the Environmental Protection Agency] was cut off starting in 1986 … The U.S. Office of Naval Research had been funding a fair amount of research in this area [in the '70s]. They [also] … stopped funding new grants in 1986 …  And then the NIH a few years later followed the same path …" As if all was not reason enough for concern or even downright panic,  the next generation of wireless technology known as 5G (fifth generation), representing the innocuous sounding Internet of Things, promises a quantum leap in power and exceedingly more damaging health impacts with mandatory exposures.      The immense expansion of radiation emissions from the current wireless EMF frequency band and 5G about to be perpetrated on an unsuspecting American public should be criminal.  Developed by the US military as non lethal perimeter and crowd control, the Active Denial System emits a high density, high frequency wireless radiation comparable to 5G and emits radiation in the neighborhood of 90 GHz.    The current Pre 5G, frequency band emissions used in today's commercial wireless range is from 300 Mhz to 3 GHZ as 5G will become the first wireless system to utilize millimeter waves with frequencies ranging from 30 to 300 GHz. One example of the differential is that a current LANS (local area network system) uses 2.4 GHz.  Hidden behind these numbers is an utterly devastating increase in health effects of immeasurable impacts so stunning as to numb the senses. In 2017, the international Environmental Health Trust recommended an EU moratorium "on the roll-out of the fifth generation, 5G, for telecommunication until potential hazards for human health and the environment hav
Paul Merrell

Senate narrowly rejects new FBI surveillance | TheHill - 0 views

  • The Senate narrowly rejected expanding the FBI's surveillance powers Wednesday in the wake of the worst mass shooting in U.S. history. Senators voted 58-38 on a procedural hurdle, with 60 votes needed to move forward. Majority Leader Mitch McConnell, who initially voted "yes," switched his vote, which allows him to potentially bring the measure back up.
  • The Senate GOP proposal—being offered as an amendment to the Commerce, Justice and Science appropriations bill—would allow the FBI to use "national security letters" to obtain people's internet browsing history and other information without a warrant during a terrorism or federal intelligence probe.  It would also permanently extend a Patriot Act provision — currently set to expire in 2019 — meant to monitor "lone wolf" extremists.  Senate Republicans said they would likely be able to get enough votes if McConnell schedules a redo.
  • Asked if he anticipates supporters will be able to get 60 votes, Sen. John Cornyn (R-Texas) separately told reporters "that's certainly my expectation." McConnell urged support for the proposal earlier Wednesday, saying it would allow the FBI to "connect the dots" in terrorist investigations. "We can focus on defeating [the Islamic State in Iraq and Syria] or we can focus on partisan politics. Some of our colleagues may think this is all some game," he said. "I believe this is a serious moment that calls for serious solutions." But Democrats—and some Republicans—raised concerns that the changes didn't go far enough to ensure Americans' privacy. Sen. Ron Wyden (D-Ore.) blasted his colleagues for "hypocrisy" after a gunman killed 49 people and injured dozens more during the mass shooting in Orlando, Fla. "Due process ought to apply as it relates to guns, but due process wouldn't apply as it relates to the internet activity of millions of Americans," he said ahead of Wednesday's vote. "Supporters of this amendment...have suggested that Americans need to choose between protecting our security and protecting our constitutional right to privacy." 
  • ...1 more annotation...
  • The American Civil Liberties Union (ACLU) also came out in opposition to the Senate GOP proposal on Tuesday, warning it would urge lawmakers to vote against it. 
  •  
    Too close for comfort and coming around the bend again. 
Paul Merrell

Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth - 0 views

  • Yesterday, news broke that Google has been stealth downloading audio listeners onto every computer that runs Chrome, and transmits audio data back to Google. Effectively, this means that Google had taken for itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on. In official statements, Google shrugged off the practice with what amounts to “we can do that”. It looked like just another bug report. "When I start Chromium, it downloads something." Followed by strange status information that notably included the lines "Microphone: Yes" and "Audio Capture Allowed: Yes".
  • Without consent, Google’s code had downloaded a black box of code that – according to itself – had turned on the microphone and was actively listening to your room.A brief explanation of the Open-source / Free-software philosophy is needed here. When you’re installing a version of GNU/Linux like Debian or Ubuntu onto a fresh computer, thousands of really smart people have analyzed every line of human-readable source code before that operating system was built into computer-executable binary code, to make it common and open knowledge what the machine actually does instead of trusting corporate statements on what it’s supposed to be doing. Therefore, you don’t install black boxes onto a Debian or Ubuntu system; you use software repositories that have gone through this source-code audit-then-build process. Maintainers of operating systems like Debian and Ubuntu use many so-called “upstreams” of source code to build the final product.Chromium, the open-source version of Google Chrome, had abused its position as trusted upstream to insert lines of source code that bypassed this audit-then-build process, and which downloaded and installed a black box of unverifiable executable code directly onto computers, essentially rendering them compromised. We don’t know and can’t know what this black box does. But we see reports that the microphone has been activated, and that Chromium considers audio capture permitted.
  • This was supposedly to enable the “Ok, Google” behavior – that when you say certain words, a search function is activated. Certainly a useful feature. Certainly something that enables eavesdropping of every conversation in the entire room, too.Obviously, your own computer isn’t the one to analyze the actual search command. Google’s servers do. Which means that your computer had been stealth configured to send what was being said in your room to somebody else, to a private company in another country, without your consent or knowledge, an audio transmission triggered by… an unknown and unverifiable set of conditions.Google had two responses to this. The first was to introduce a practically-undocumented switch to opt out of this behavior, which is not a fix: the default install will still wiretap your room without your consent, unless you opt out, and more importantly, know that you need to opt out, which is nowhere a reasonable requirement. But the second was more of an official statement following technical discussions on Hacker News and other places. That official statement amounted to three parts (paraphrased, of course):
  • ...4 more annotations...
  • 1) Yes, we’re downloading and installing a wiretapping black-box to your computer. But we’re not actually activating it. We did take advantage of our position as trusted upstream to stealth-insert code into open-source software that installed this black box onto millions of computers, but we would never abuse the same trust in the same way to insert code that activates the eavesdropping-blackbox we already downloaded and installed onto your computer without your consent or knowledge. You can look at the code as it looks right now to see that the code doesn’t do this right now.2) Yes, Chromium is bypassing the entire source code auditing process by downloading a pre-built black box onto people’s computers. But that’s not something we care about, really. We’re concerned with building Google Chrome, the product from Google. As part of that, we provide the source code for others to package if they like. Anybody who uses our code for their own purpose takes responsibility for it. When this happens in a Debian installation, it is not Google Chrome’s behavior, this is Debian Chromium’s behavior. It’s Debian’s responsibility entirely.3) Yes, we deliberately hid this listening module from the users, but that’s because we consider this behavior to be part of the basic Google Chrome experience. We don’t want to show all modules that we install ourselves.
  • If you think this is an excusable and responsible statement, raise your hand now.Now, it should be noted that this was Chromium, the open-source version of Chrome. If somebody downloads the Google product Google Chrome, as in the prepackaged binary, you don’t even get a theoretical choice. You’re already downloading a black box from a vendor. In Google Chrome, this is all included from the start.This episode highlights the need for hard, not soft, switches to all devices – webcams, microphones – that can be used for surveillance. A software on/off switch for a webcam is no longer enough, a hard shield in front of the lens is required. A software on/off switch for a microphone is no longer enough, a physical switch that breaks its electrical connection is required. That’s how you defend against this in depth.
  • Of course, people were quick to downplay the alarm. “It only listens when you say ‘Ok, Google’.” (Ok, so how does it know to start listening just before I’m about to say ‘Ok, Google?’) “It’s no big deal.” (A company stealth installs an audio listener that listens to every room in the world it can, and transmits audio data to the mothership when it encounters an unknown, possibly individually tailored, list of keywords – and it’s no big deal!?) “You can opt out. It’s in the Terms of Service.” (No. Just no. This is not something that is the slightest amount of permissible just because it’s hidden in legalese.) “It’s opt-in. It won’t really listen unless you check that box.” (Perhaps. We don’t know, Google just downloaded a black box onto my computer. And it may not be the same black box as was downloaded onto yours. )Early last decade, privacy activists practically yelled and screamed that the NSA’s taps of various points of the Internet and telecom networks had the technical potential for enormous abuse against privacy. Everybody else dismissed those points as basically tinfoilhattery – until the Snowden files came out, and it was revealed that precisely everybody involved had abused their technical capability for invasion of privacy as far as was possible.Perhaps it would be wise to not repeat that exact mistake. Nobody, and I really mean nobody, is to be trusted with a technical capability to listen to every room in the world, with listening profiles customizable at the identified-individual level, on the mere basis of “trust us”.
  • Privacy remains your own responsibility.
  •  
    And of course, Google would never succumb to a subpoena requiring it to turn over the audio stream to the NSA. The Tor Browser just keeps looking better and better. https://www.torproject.org/projects/torbrowser.html.en
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
  • ...2 more annotations...
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
Gary Edwards

Microsoft Office whips Google Docs: It's finally game over | Computerworld Blogs - 0 views

  •  
    "If there was ever any doubt about whether Microsoft or Google would win the war of office suites, there should be no longer. Within the last several weeks, Microsoft has pulled so far ahead that it's game over. Here's why. When it comes to which suite is more fully featured, there's never been any real debate: Microsoft Office wins hands down. Whether you're creating entire presentations, creating complicated word-processing documents, or even doing something as simple as handling text attributes, Office is a far better tool. Until the last few weeks, Google Docs had one significant advantage over Microsoft Office: It's available for Android and the iPad as well as PCs because it's Web-based. The same wasn't the case for Office. So if you wanted to use an office suite on all your mobile devices, Google Docs was the way to go. Google Docs lost that advantage when Microsoft released Office for the iPad. There's not yet a native version for Android tablets, but Microsoft is working on that, telling GeekWire, "Let me tell you conclusively: Yes, we are also building Android native applications for tablets for Word, Excel and PowerPoint." Google Docs is still superior to Office's Web-based version, but that's far less important than it used to be. There's no need to go with a Web-based office suite if a superior suite is available as a native apps on all platforms, mobile or otherwise. And Office's collaboration capabilities are quite considerable now. Of course, there's always the question of price. Google Docs is free. Microsoft Office isn't. But at $100 a year for up to five devices, or $70 a year for two, no one will be going broke paying for Microsoft Office. It's worth paying that relatively small price for a much better office suite. Google Docs won't die. It'll be around as second fiddle for a long time. But that's what it will always remain: a second fiddle to the better Microsoft Office."
  •  
    Google acquired "Writely", a small company in Portola Valley that pioneered document editing in a browser. Writely was perhaps the first cloud computing editor to go beyond simple HTML; eventually crafting some really cool CSS-JavaScript-JSON document layout and editing methods. But it can't edit native MSOffice documents. It converts them. There are more than a few problems with the Google Docs approach to editing advanced "compound" documents, but two stick out and are certain to give pause to anyone making the great transition from local workgroup computing, to the highly mobile, always connected, cloud computing. The first problem certain to become a show stopper is that Google converts documents to their native on-line format for editing and collaboration. And then they convert back. To many this isn't a problem. But if the document is part of a workflow or business process, conversion is a killer. There is an old saw affectionately known as "Reuters Law", dating back to the ODF-OXML document wars, that emphatically states; "Conversion breaks documents." The breakage includes both the visual layout of the document, and, the "compound" aspects and data connections that are internal to the document. Think of this way. A business document that is part of a legacy Windows Workgroup workflow is opened up in gDocs. Google converts the document for editing purposes. The data and the workflow internals that bind the document to the local business system are broken on conversion. The look of the document is also visually shredded as the gDocs layout engine is applied. For all practical purposes, no matter what magic editing and collaboration value is added, a broken document means a broken business process. Let me say that again, with the emphasis of having witnessed this first hand during the year long ODF transition trials the Commonwealth of Massachusetts conducted in 2005 and 2006. The business process broke every time a conversion was conducted "on a busines
Paul Merrell

The coming merge of human and machine intelligence - 0 views

  • Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.
  • BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004. These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is. BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.
  • Mind meld: But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.
  • Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts." A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe. Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.
  • Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network. It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.
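    The "read the network, not the neuron" point can be made concrete. A classic decoding technique from the BCI literature is the population vector: each motor neuron fires most strongly for a preferred movement direction, and averaging the preferred directions weighted by firing rates recovers the intended movement. A toy Python sketch with simulated neurons (purely illustrative; BrainGate's actual decoder is not public):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy model: 100 neurons, each with a random preferred direction in 2D.
        n_neurons = 100
        angles = rng.uniform(0, 2 * np.pi, n_neurons)
        preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

        def firing_rates(intended, baseline=10.0, gain=8.0):
            # Cosine tuning: a neuron fires faster the closer the intended
            # movement is to its preferred direction (plus Poisson noise).
            rates = baseline + gain * preferred @ intended
            return rng.poisson(np.clip(rates, 0, None))

        def population_vector(rates, baseline=10.0):
            # Weight each preferred direction by how far the observed rate
            # sits above baseline, then normalize.
            v = (rates - baseline) @ preferred
            return v / np.linalg.norm(v)

        intended = np.array([np.cos(0.6), np.sin(0.6)])  # the "thought" direction
        decoded = population_vector(firing_rates(intended))
        print("intended:", intended, "decoded:", decoded)

    No single neuron carries the answer; the direction emerges only from the ensemble, which is why the chip attaches to a network of neurons rather than to one.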
  •  
    Of course once the human brain is interfaced with the internet, then we will be able to do the Vulcan mind-meld thing. And NSA will be busily crawling the Internet for fresh brain dumps to their data center, which then encompasses the entire former state of Utah. Conventional warfare is a thing of the past as the cyberwar commands of great powers battle for control of the billions of minds making up BrainNet, the internet's successor. Meanwhile, a hacker's Reaper malware trawls BrainNet for bank account numbers and passwords that it forwards for automated harvesting of personal funds. "Ah, Houston ... we have a problem ..."
Paul Merrell

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data C... - 0 views

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—which included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number. 
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search?
    Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas.
    Feinstein: Regardless of the minimization that was properly carried out.
    Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches.
    It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the* persons or *things to be seized."* So much for the particularized description of the place to be searched and the things to be seized. Fah! Who needs a Constitution, anyway ....
Paul Merrell

Google Says Website Encryption Will Now Influence Search Rankings - 0 views

  • Google will begin using website encryption, or HTTPS, as a ranking signal – a move which should prompt website developers who have dragged their heels on increased security measures, or who debated whether their website was “important” enough to require encryption, to make a change. Initially, HTTPS will only be a lightweight signal, affecting fewer than 1% of global queries, says Google. That means that the new signal won’t carry as much weight as other factors, including the quality of the content, the search giant noted, as Google means to give webmasters time to make the switch to HTTPS. Over time, however, encryption’s effect on search ranking may strengthen, as the company places more importance on website security. Google also promises to publish a series of best practices around TLS (HTTPS is also known as HTTP over TLS, or Transport Layer Security) so website developers can better understand what they need to do in order to implement the technology and what mistakes they should avoid. These tips will include things like what certificate type is needed, how to use relative URLs for resources on the same secure domain, best practices around allowing for site indexing, and more.
  • In addition, website developers can test their current HTTPS-enabled website using the Qualys Lab tool, says Google, and can direct further questions to Google’s Webmaster Help Forums where the company is already in active discussions with the broader community. The announcement has drawn a lot of feedback from website developers and those in the SEO industry – for instance, Google’s own blog post on the matter, shared in the early morning hours on Thursday, is already nearing 1,000 comments. For the most part, the community seems to support the change, or at least acknowledge that they felt that something like this was in the works and are not surprised. Google itself has been making moves to better secure its own traffic in recent months, which have included encrypting traffic between its own servers. Gmail now always uses an encrypted HTTPS connection which keeps mail from being snooped on as it moves from a consumer’s machine to Google’s data centers.
  • While HTTPS and site encryption have been a best practice in the security community for years, the revelation that the NSA has been tapping the cables, so to speak, to mine user information directly has prompted many technology companies to consider increasing their own security measures, too. Yahoo, for example, also announced in November its plans to encrypt its data center traffic. Now Google is helping to push the rest of the web to do the same.
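    For developers making the switch, a quick local sanity check complements the Qualys Lab tool mentioned above. A minimal sketch using Python's standard library (the hostname is a placeholder):

        import socket
        import ssl

        def check_tls(hostname, port=443):
            # Negotiate TLS using the platform's trusted CA store and report
            # the protocol version, cipher, and certificate validity window.
            ctx = ssl.create_default_context()  # verifies chain and hostname
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                    cert = tls.getpeercert()
                    print(f"{hostname}: {tls.version()}, cipher {tls.cipher()[0]}")
                    print("  valid from", cert["notBefore"], "until", cert["notAfter"])

        check_tls("example.com")  # placeholder; substitute the site under test

    A handshake that fails here (expired certificate, hostname mismatch, untrusted chain) is the same failure that browsers and, presumably, Google's crawler will see.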
  •  
    The Internet continues to harden in the wake of the NSA revelations. This is a nice nudge by Google.
Paul Merrell

Use Tor or 'EXTREMIST' Tails Linux? Congrats, you're on the NSA's list * The Register - 0 views

  • Alleged leaked documents about the NSA's XKeyscore snooping software appear to show the paranoid agency is targeting Tor and Tails users, Linux Journal readers – and anyone else interested in online privacy. Apparently, this configuration file for XKeyscore is in the divulged data, which was obtained and studied by members of the Tor project and security specialists for German broadcasters NDR and WDR. In their analysis of the alleged top-secret documents, they claim the NSA is, among other things: specifically targeting Tor directory servers; reading email contents for mentions of Tor bridges; logging IP addresses used to search for privacy-focused websites and software; and possibly breaking international law in doing so. We already know from leaked Snowden documents that Western intelligence agents hate Tor for its anonymizing abilities. But what the aforementioned leaked source code, written in a rather strange custom language, shows is that not only is the NSA targeting the anonymizing network Tor specifically, it is also taking digital fingerprints of any netizens who are remotely interested in privacy.
  • These include readers of the Linux Journal site, anyone visiting the website for the Tor-powered Linux operating system Tails – described by the NSA as "a comsec mechanism advocated by extremists on extremist forums" – and anyone looking into combining Tails with the encryption tool Truecrypt. If something as innocuous as Linux Journal is on the NSA's hit list, it's a distinct possibility that El Reg is too, particularly in light of our recent exclusive report on GCHQ – which led to a Ministry of Defence advisor coming round our London office for a chat.
  • If you take even the slightest interest in online privacy or have Googled a Linux Journal article about a broken package, you are earmarked in an NSA database for further surveillance, according to these latest leaks. This is assuming the leaked file is genuine, of course. Other monitored sites, we're told, include HotSpotShield, FreeNet, Centurian, FreeProxies.org, MegaProxy, privacy.li and an anonymous email service called MixMinion. The IP address of computer users even looking at these sites is recorded and stored on the NSA's servers for further analysis, and it's up to the agency how long it keeps that data. The XKeyscore code, we're told, includes microplugins that target Tor servers in Germany, at MIT in the United States, in Sweden, in Austria, and in the Netherlands. In doing so it may not only fall foul of German law but also the US's Fourth Amendment.
  • The nine Tor directory servers receive especially close monitoring from the NSA's spying software, which states the "goal is to find potential Tor clients connecting to the Tor directory servers." Tor clients linking into the directory servers are also logged. "This shows that Tor is working well enough that Tor has become a target for the intelligence services," said Sebastian Hahn, who runs one of the key Tor servers. "For me this means that I will definitely go ahead with the project."
  • While the German reporting team has published part of the XKeyscore scripting code, it doesn't say where it comes from. NSA whistleblower Edward Snowden would be a logical pick, but security experts are not so sure. "I do not believe that this came from the Snowden documents," said security guru Bruce Schneier. "I also don't believe the TAO catalog came from the Snowden documents. I think there's a second leaker out there." If so, the NSA is in for much more scrutiny than it ever expected.
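    For readers unfamiliar with how such selectors work: the published rules boil down to pattern fingerprints applied to observed traffic, with matching source addresses logged under a label. A toy Python illustration (the real XKeyscore rules are written in a custom language, and these patterns and labels are loose paraphrases, not the leaked ones):

        import re

        # Hypothetical fingerprint rules: label -> pattern matched against URLs.
        FINGERPRINTS = {
            "anonymizer/tor/bridge": re.compile(r"bridges\.torproject\.org", re.I),
            "anonymizer/tails": re.compile(r"tails\.boum\.org|linuxjournal", re.I),
            "anonymizer/proxy": re.compile(r"hotspotshield|megaproxy|freeproxies", re.I),
        }

        def tag_traffic(src_ip, url):
            # Return every label whose pattern matches; a real system would
            # store src_ip under each matching label for later analysis.
            return [label for label, pat in FINGERPRINTS.items() if pat.search(url)]

        print(tag_traffic("198.51.100.7", "https://tails.boum.org/download/"))

    The unsettling part, as the report notes, is not the mechanism's sophistication but that mere pattern matches on reading material are enough to put an address on a list.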
Paul Merrell

Open Access Can't Wait. Pass FASTR Now. | Electronic Frontier Foundation - 1 views

  • When you pay for federally funded research, you should be allowed to read it. That’s the idea behind the Fair Access to Science and Technology Research Act (S.1701, H.R.3427), which was recently reintroduced in both houses of Congress. FASTR was first introduced in 2013, and while it has strong support in both parties, it has never gained enough momentum to pass. We need to change that. Let’s tell Congress that passing an open access law should be a top priority.
  • Tell Congress: It’s time to move FASTR. The proposal is pretty simple: Under FASTR, every federal agency that spends more than $100 million on grants for research would be required to adopt an open access policy. The bill gives each agency flexibility to implement an open access policy suited to the work it funds, so long as research is available to the public after an “embargo period” of a year or less. One of the major points of contention around FASTR is how long that embargo period should be. Last year, the Senate Homeland Security and Governmental Affairs Committee approved FASTR unanimously, but only after extending that embargo period from six months to 12, putting FASTR in line with the 2013 White House open access memo. That’s the version that was recently reintroduced in the Senate. The House bill, by contrast, sets the embargo period at six months. EFF supports a shorter period. Part of what’s important about open access is that it democratizes knowledge: when research is available to the public, you don’t need expensive journal subscriptions or paid access to academic databases in order to read it. A citizen scientist can use and build on the same body of knowledge as someone with institutional connections. But in the fast-moving world of scientific research, 12 months is an eternity. A shorter embargo is far from a radical proposition, especially in 2017. The landscape for academic publishing is very different from what it was when FASTR was first introduced, thanks in large part to nongovernmental funders who already enforce open access mandates. Major foundations like Ford, Gates, and Hewlett have adopted strong open access policies requiring that research be not only available to the public, but also licensed to allow republishing and reuse by anyone.
  • Just last year, the Gates Foundation made headlines when it dropped the embargo period from its policy entirely, requiring that research be published openly immediately. After a brief standoff, major publishers began to accommodate Gates’ requirements. As a result, we finally have public confirmation of what we’ve always known: open access mandates don’t put publishers out of business; they push them to modernize their business models. Imagine how a strong open access mandate for government-funded research—with a requirement that that research be licensed openly—could transform publishing. FASTR may not be that law, but it’s a huge step in the right direction, and it’s the best option on the table today. Let’s urge Congress to pass a version of FASTR with an embargo period of six months or less, and then use it as a foundation for stronger open access in the future.
Paul Merrell

The Million Dollar Dissident: NSO Group's iPhone Zero-Days used against a UAE Human Rig... - 0 views

  • 1. Executive Summary Ahmed Mansoor is an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”).  On August 10 and 11, 2016, Mansoor received SMS text messages on his iPhone promising “new secrets” about detainees tortured in UAE jails if he clicked on an included link. Instead of clicking, Mansoor sent the messages to Citizen Lab researchers.  We recognized the links as belonging to an exploit infrastructure connected to NSO Group, an Israel-based “cyber war” company that sells Pegasus, a government-exclusive “lawful intercept” spyware product.  NSO Group is reportedly owned by an American venture capital firm, Francisco Partners Management. The ensuing investigation, a collaboration between researchers from Citizen Lab and from Lookout Security, determined that the links led to a chain of zero-day exploits (“zero-days”) that would have remotely jailbroken Mansoor’s stock iPhone 6 and installed sophisticated spyware.  We are calling this exploit chain Trident.  Once infected, Mansoor’s phone would have become a digital spy in his pocket, capable of employing his iPhone’s camera and microphone to snoop on activity in the vicinity of the device, recording his WhatsApp and Viber calls, logging messages sent in mobile chat apps, and tracking his movements.   We are not aware of any previous instance of an iPhone remote jailbreak used in the wild as part of a targeted attack campaign, making this a rare find.
  • The Trident Exploit Chain:
    CVE-2016-4657: Visiting a maliciously crafted website may lead to arbitrary code execution
    CVE-2016-4655: An application may be able to disclose kernel memory
    CVE-2016-4656: An application may be able to execute arbitrary code with kernel privileges
    Once we confirmed the presence of what appeared to be iOS zero-days, Citizen Lab and Lookout quickly initiated a responsible disclosure process by notifying Apple and sharing our findings. Apple responded promptly, and notified us that they would be addressing the vulnerabilities. We are releasing this report to coincide with the availability of the iOS 9.3.5 patch, which blocks the Trident exploit chain by closing the vulnerabilities that NSO Group appears to have exploited and sold to remotely compromise iPhones. Recent Citizen Lab research has shown that many state-sponsored spyware campaigns against civil society groups and human rights defenders use “just enough” technical sophistication, coupled with carefully planned deception. This case demonstrates that not all threats follow this pattern. The iPhone has a well-deserved reputation for security. As the iPhone platform is tightly controlled by Apple, technically sophisticated exploits are often required to enable the remote installation and operation of iPhone monitoring tools. These exploits are rare and expensive. Firms that specialize in acquiring zero-days often pay handsomely for iPhone exploits. One such firm, Zerodium, acquired an exploit chain similar to the Trident for one million dollars in November 2015. The high cost of iPhone zero-days, the apparent use of NSO Group’s government-exclusive Pegasus product, and prior known targeting of Mansoor by the UAE government provide indicators that point to the UAE government as the likely operator behind the targeting. Remarkably, this case marks the third commercial “lawful intercept” spyware suite employed in attempts to compromise Mansoor. In 2011, he was targeted with FinFisher’s FinSpy spyware, and in 2012 he was targeted with Hacking Team’s Remote Control System. Both Hacking Team and FinFisher have been the object of several years of revelations highlighting the misuse of spyware to compromise civil society groups, journalists, and human rights workers.
Paul Merrell

'Shadow Brokers' give away more NSA hacking tools - 0 views

  • The elusive Shadow Brokers didn't have much luck selling the NSA's hacking tools, so they're giving more of the software away -- to everyone. In a Medium post, the mysterious team supplied the password for an encrypted file containing many of the Equation Group surveillance tools swiped back in 2016. Supposedly, the group posted the content in "protest" at President Trump turning his back on the people who voted for him. The leaked data appears to check out, according to researchers, but some of it is a couple of decades old and focused on platforms like Linux. If anything, the leak might backfire. Edward Snowden notes that while the leak is "nowhere near" representing the NSA's complete tool set, there's enough that the NSA should "instantly identify" where and how the kit leaked. This doesn't mean the Shadow Brokers themselves are about to face capture. However, this may give the agency info it needs to both connect the dots (how much of a role did NSA contractor Harold Thomas Martin III play in the online leak, for instance?) and prevent a repeat incident. Does this open a can of worms? It's hard to say -- researchers are still combing over the data. If there are any hacks that can be made useful, though, this could be problematic for server operators worried about cybercrime. If nothing else, it shows that the Shadow Brokers didn't reveal their full hand.